Session F-7

Distributed Learning

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 5:30 AM — 7:00 AM PDT
Location
Babbio 220

Matching DNN Compression and Cooperative Training with Resources and Data Availability

Francesco Malandrino (CNR-IEIIT, Italy); Giuseppe Di Giacomo (Politecnico Di Torino, Italy); Armin Karamzade (University of California Irvine, USA); Marco Levorato (University of California, Irvine, USA); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy)

To make machine learning (ML) sustainable and suited to running on the diverse devices where relevant data resides, it is essential to compress ML models as needed, while still meeting the required learning quality and time performance. However, how much and when an ML model should be compressed, and where its training should be executed, are hard decisions to make, as they depend on the model itself, the resources of the available nodes, and the data those nodes own. We model the network system focusing on the training of DNNs, formalize the above problem, and, given its NP-hardness, propose an approximate dynamic programming formulation that we solve through the PACT algorithmic framework. Importantly, the latter leverages a time-expanded graph representing the learning process, and a combined data-driven and theoretical approach for predicting the loss trajectories to be expected as a consequence of training decisions. We prove that PACT's solutions can get as close to the optimum as desired, at the cost of increased time complexity, and that, in any case, its worst-case complexity is polynomial. Numerical results also show that, even under the most disadvantageous settings, PACT outperforms state-of-the-art alternatives and closely matches the minimum overall energy cost.
Speaker Carla Fabiana Chiasserini (Politecnico di Torino)

Carla Fabiana Chiasserini is an IEEE Fellow and a Full Professor at Politecnico di Torino, Italy. Her research interests include architectures, protocols, and performance analysis of wireless networks, and networking support for machine learning.


On the Limit Performance of Floating Gossip

Gianluca Rizzo (HES SO Valais, Switzerland & Universita' di Foggia, Italy); Noelia Perez Palma (University of Murcia & University Carlos III, Spain); Vincenzo Mancuso and Marco G Ajmone Marsan (IMDEA Networks Institute, Spain)

In this paper we investigate the limit performance of Floating Gossip (FG), a new, fully distributed Gossip Learning (GL) scheme which relies on Floating Content (FC) to implement location-based probabilistic model storage in an infrastructure-less manner.

We consider dynamic scenarios where continuous learning is required, and we adopt a mean field approach to investigate the limit performance of FG in terms of the amount of data that users can incorporate into their models, as a function of the main system parameters. Unlike existing approaches, in which either the communication or the computing aspects of GL are analyzed and optimized, our approach accounts for the compound impact of both aspects. We validate our results through detailed simulations, showing good accuracy. Our model shows that Floating Gossip can be very effective in implementing continuous training and updating of machine learning models in a cooperative manner, based on opportunistic exchanges among moving users.
Speaker Gianluca Rizzo
Gianluca Rizzo received the degree in Electronic Engineering from Politecnico di Torino, Italy, in 2001. From September 2001 to December 2003, he was a researcher at Telecom Italia Lab, Torino, Italy. From 2004 to 2008 he was at EPFL Lausanne, where in 2008 he received his PhD in Computer Science. From 2009 to 2013 he was a Staff Researcher at the IMDEA Networks Institute in Madrid, Spain. Since April 2013 he has been a Senior Researcher at HES SO Valais, Switzerland. His research interests are in the performance evaluation of computer networks, particularly Network Calculus, and in green networking.



Communication-Aware DNN Pruning

Tong Jian, Debashri Roy, Batool Salehihikouei, Nasim Soltani, Kaushik Chowdhury and Stratis Ioannidis (Northeastern University, USA)

We propose Communication-aware Pruning (CaP), a novel distributed inference framework for distributing DNN computations across a physical network. Departing from conventional pruning methods, CaP takes the physical network topology into consideration and produces DNNs that are communication-aware, designed for both accurate and fast execution over such a distributed deployment. Our experiments on CIFAR-10 and CIFAR-100, two deep learning benchmark datasets, show that CaP beats state-of-the-art competitors by up to 4% in accuracy. In experiments on real-world scenarios, it simultaneously reduces total execution time by 27%-68% with a negligible performance decrease (less than 1%).
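As a point of reference for the mask-based pruning mechanics involved, the Python sketch below implements plain magnitude pruning; CaP itself additionally accounts for the communication cost of the physical network topology, which this illustration omits. The function name and the tiny weight matrix are our own choices, not from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.
    This is generic magnitude pruning, NOT CaP: CaP additionally
    weighs connections by cross-device communication cost."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

w = np.array([[0.1, -2.0], [0.5, 3.0]])
pruned, mask = magnitude_prune(w, sparsity=0.5)
print(pruned)  # the two smallest-magnitude weights are zeroed
```

In a real pipeline the mask would be kept fixed while the surviving weights are fine-tuned to recover accuracy.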
Speaker Tong Jian (Analog Devices; Northeastern University)

Tong Jian is a Machine Learning Scientist at Analog Devices in Boston, MA, where she works on AI for Science and builds AI solutions for the intelligent edge. She completed her Ph.D. in Computer Engineering at Northeastern University in Boston in 2022, where she specialized in adversarial robustness and applied machine learning for wireless communication. During her Ph.D., she gained industry experience through internships at Nokia Bell Labs, where she worked on indoor WiFi localization, and at Amazon, where she focused on improving state-of-the-art recommendation systems.


OPA: One-Predict-All For Efficient Deployment

Junpeng Guo, Shengqing Xia and Chunyi Peng (Purdue University, USA)

Deep neural networks (DNNs) have become a ubiquitous technique in mobile and embedded systems for various vision applications. The best-fit DNN is determined by various deployment factors such as source data, compute power, and QoS requirements. The ''Train Once, Deploy Everywhere'' paradigm addresses this by providing a huge number of sub-networks (subnets) to fit different deployment scenarios. However, deployment factors are numerous and often change dynamically, leading to exponentially many scenarios. It is computationally impossible to enumerate all the subnets for all scenarios to find the best-fit one. Existing works operate only at a coarse granularity: they mostly consider static factors such as hardware capabilities and fail to capture dynamics in both computing resources and data contents. In this work, we propose OPA to adapt to all deployment scenarios. The core idea is to run a shallow subnet as a pioneer to perceive the actual condition of the current deployment and use its performance to predict that of all other subnets. At runtime, we quickly locate subnets whose latency is close to the requirement and conduct a small-scale search. Compared to the state-of-the-art, OPA achieves up to 26% higher Top-1 accuracy for a given latency requirement.
Speaker Junpeng Guo (Purdue University)

Junpeng Guo is a Ph.D. candidate at Purdue University supervised by Prof. Chunyi Peng. His research interests lie in the interdisciplinary field of mobile computing and computer vision, with a focus on building efficient mobile vision systems. He is currently seeking a summer internship in a research lab or in industry.


Session Chair

Christopher G. Brinton

Session F-8

Network Design and Fault Tolerance

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 7:30 AM — 9:00 AM PDT
Location
Babbio 220

Distributed Demand-aware Network Design using Bounded Square Root of Graphs

Or Peres (Ben Gurion University, Israel); Chen Avin (Ben-Gurion University of the Negev, Israel)

While the traditional design of network topologies is demand-oblivious, recent advances in reconfigurable networks enable real-time and dynamic communication network topologies, e.g., in datacenter networks. This trend motivates a new paradigm where topologies can adjust to the demand they need to serve. We consider the static version of this network design problem, where the input is a request distribution D (demand matrix) and a bound ∆ on the maximum degree of the output topology. In turn, the objective is to design an (undirected) demand-aware network N of bounded degree ∆ which minimizes the expected path length (with respect to D).

This paper draws a connection between the k-root of graphs and the network design problem, and uses forest decomposition of the demand as the primary methodology. In turn, we provide new algorithms for demand-aware network design, including cases where our algorithms are (order) optimal and improve previous results. In addition, we provide, for the first time and for the case of bounded arboricity, i) an efficient distributed algorithm for the CONGEST model and ii) an efficient PRAM-based parallel algorithm. We also present empirical results on real-world demand matrices where our algorithms produce network designs with both low degree and low expected path length.
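To make the objective concrete, the Python sketch below evaluates the expected path length of a candidate topology under a demand matrix D using BFS hop counts. It illustrates only the metric being minimized, not the paper's design algorithms; all names and the 4-node example are our own.

```python
from collections import deque

def expected_path_length(adj, demand):
    """Expected path length of topology `adj` (adjacency lists)
    under `demand[u][v]` (request probabilities summing to 1).
    Uses unweighted BFS hop distances from every source."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        dist = [None] * n
        dist[s] = 0
        q = deque([s])
        while q:  # standard BFS from s
            u = q.popleft()
            for v in adj[u]:
                if dist[v] is None:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in range(n):
            if demand[s][t] and dist[t] is not None:
                total += demand[s][t] * dist[t]
    return total

# 4-node path 0-1-2-3; all demand between the endpoints 0 and 3
adj = [[1], [0, 2], [1, 3], [2]]
demand = [[0.0] * 4 for _ in range(4)]
demand[0][3] = demand[3][0] = 0.5
print(expected_path_length(adj, demand))  # 3.0
```

A demand-aware design would instead add a direct 0-3 link (degree permitting), dropping the expected path length to 1.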
Speaker Chen Avin

Chen Avin is a Professor at the School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Israel. He received his MSc and Ph.D. in computer science from the University of California, Los Angeles (UCLA) in 2003 and 2006. Recently he served as the chair of the Communication Systems Engineering department at BGU. His current research interests are data-driven graphs and network algorithms, modeling, and analysis, emphasizing demand-aware networks, distributed systems, social networks, and randomized algorithms for networking.


A Fast and Exact Evaluation Algorithm for the Expected Number of Connected Nodes: an Enhanced Network Reliability Measure

Kengo Nakamura (NTT Corporation & Kyoto University, Japan); Takeru Inoue (NTT Network Innovation Labs., Japan); Masaaki Nishino and Norihito Yasuda (NTT Comunication Science Laboratories, Japan); Shin-ichi Minato (Kyoto University, Japan)

Contemporary society relies on several network infrastructures, such as communication and transportation. These infrastructures are required to keep all nodes connected, although nodes are occasionally disconnected due to failures. Thus, the expected number of connected node pairs (ECP) during an operation period is a reasonable reliability measure in network design. However, no work has studied ECP due to its computational hardness: one would have to solve the reliability evaluation problem, itself a computationally tough problem, \(O(n^2)\) times, where n is the number of nodes in the network.

This paper proposes an efficient method that exactly computes ECP. Our method performs dynamic programming just once without explicit repetition for each node pair and obtains an exact ECP value weighted by the number of users at each node. A thorough complexity analysis reveals that our method is faster than an existing reliability evaluation method, which can be transferred to ECP computation, by \(O(n)\). Numerical experiments using real topologies show great efficiency; e.g., our method computes the ECP of an 821-link network in ten seconds; the existing method cannot complete it in an hour. This paper also presents two applications: critical link identification and optimal resource (e.g., a server) placement.
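For intuition about what ECP measures, the Python sketch below estimates it by naive Monte Carlo over independent link failures, i.e., the kind of repeated per-state evaluation that the paper's exact single-pass dynamic program is designed to avoid. The failure model (each link up independently with probability p_up) and all names are our own assumptions.

```python
import random

def connected_pairs(n, edges, up):
    """Count connected node pairs in one failure state via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (u, v), alive in zip(edges, up):
        if alive:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return sum(s * (s - 1) // 2 for s in sizes.values())

def ecp_monte_carlo(n, edges, p_up, trials=2000, seed=0):
    """Monte Carlo ECP estimate: average connected pairs over
    sampled failure states. Exact methods replace this sampling."""
    rng = random.Random(seed)
    acc = 0
    for _ in range(trials):
        up = [rng.random() < p_up for _ in edges]
        acc += connected_pairs(n, edges, up)
    return acc / trials

# Sanity check: a triangle with perfectly reliable links keeps
# all 3 node pairs connected, so ECP = 3.
print(ecp_monte_carlo(3, [(0, 1), (1, 2), (0, 2)], p_up=1.0))  # 3.0
```

Weighting each pair by the product of user counts at its endpoints yields the user-weighted variant the paper computes.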
Speaker Kengo Nakamura (NTT Corporation & Kyoto University)

Kengo Nakamura received the B.E. and M.E. degrees in information science and technology from the University of Tokyo, Japan, in 2016 and 2018. He is currently working as a researcher at NTT Communication Science Laboratories, Japan, and pursuing a Ph.D. degree at Kyoto University, Japan.


Network Slicing: Market Mechanism and Competitive Equilibria

Panagiotis Promponas and Leandros Tassiulas (Yale University, USA)

Towards addressing spectral scarcity and enhancing resource utilization in 5G networks, network slicing is a promising technology for establishing end-to-end virtual networks without requiring additional infrastructure investments. By leveraging Software Defined Networks (SDN) and Network Function Virtualization (NFV), we can realize slices that are completely isolated and dedicated to satisfying the users' diverse Quality of Service (QoS) prerequisites and Service Level Agreements (SLAs). This paper focuses on the technical and economic challenges that emerge from the application of the network slicing architecture to real-world scenarios. We consider a market where multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). The SPs, in turn, offer those resources as slices to their associated users. We propose a holistic iterative model for the network slicing market along with a clock auction that converges to a robust \(\epsilon\)-competitive equilibrium. At the end of each cycle of the market, the slices are reconfigured and the SPs aim to learn the private parameters of their users. Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.
Speaker Panagiotis Promponas

Panagiotis Promponas (Graduate Student Member, IEEE) received the Diploma degree in electrical and computer engineering (ECE) from the National Technical University of Athens (NTUA), Greece, in 2019. He is currently a PhD student in the Electrical Engineering department at Yale University. His research interests center on resource allocation in constrained interdependent systems, with particular applications in quantum networks and wireless networks. He was a recipient of the Best Paper Award at the 12th IFIP WMNC 2019.


Tomography-based Progressive Network Recovery and Critical Service Restoration after Massive Failures

Viviana Arrigoni, Matteo Prata and Novella Bartolini (Sapienza University of Rome, Italy)

Massive failures in communication networks are a consequence of natural disasters, heavy blackouts, and military and cyber attacks. We tackle the problem of minimizing the time and number of interventions needed to restore the communication network sufficiently to support emergency services after large-scale failures. We propose PRoTON (Progressive RecOvery and Tomography-based mONitoring), an efficient algorithm for the progressive recovery of emergency services. Unlike previous work, which assumes centralized routing and complete network observability, PRoTON addresses the more realistic scenario in which the network relies on existing routing protocols, and knowledge of the network state is partial and uncertain. Simulation results carried out on real topologies show that our algorithm outperforms previous solutions in terms of cumulative routed flow, repair costs, and recovery time in both static and dynamic failure scenarios.
Speaker Viviana Arrigoni (Sapienza University of Rome)

Viviana Arrigoni is a research fellow at the Department of Computer Science at Sapienza University of Rome, where she also received her PhD. She has authored several research papers. Her research interests are networking, network monitoring, and computational linear algebra.


Session Chair

Gabor Retvari

Session F-9

Memory/Cache Management 2

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM PDT
Location
Babbio 220

Two-level Graph Caching for Expediting Distributed GNN Training

Zhe Zhang, Ziyue Luo and Chuan Wu (The University of Hong Kong, Hong Kong)

Graph Neural Networks (GNNs) are increasingly popular due to their excellent performance on learning graph-structured data in various domains. With fast-expanding graph sizes and feature dimensions, distributed GNN training has been adopted, with multiple concurrent workers learning on different portions of a large graph. It has been observed that a main bottleneck in distributed GNN training lies in graph feature fetching across servers, which dominates the training time of each training iteration at each worker. This paper studies efficient feature caching on each worker to minimize feature fetching overhead, in order to expedite distributed GNN training. Current distributed GNN training systems largely adopt static caching of fixed neighbor nodes. We propose a novel two-level dynamic cache design exploiting both GPU memory and host memory at each worker, and design efficient two-level dynamic caching algorithms based on online optimization and a lookahead batching mechanism. Our dynamic caching algorithms consider node requesting probabilities and heterogeneous feature fetching costs from different servers, achieving an \(O(\log^3 k)\) competitive ratio in terms of overall feature-fetching communication cost (where k is the cache capacity). We evaluate the practical performance of our caching design with testbed experiments, and show that our design achieves up to 5.4× convergence speed-up.
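As a minimal illustration of the two-tier idea, the Python sketch below chains a small "GPU" cache in front of a larger "host" cache, with plain LRU in both tiers and demotion on eviction. The paper's algorithms are instead probability- and fetch-cost-aware online policies with lookahead batching, which this toy omits; all names are our own.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy two-level feature cache: a small fast tier (GPU memory)
    backed by a larger slow tier (host memory). Plain LRU in both
    tiers; GPU evictions are demoted to the host tier."""
    def __init__(self, gpu_cap, host_cap):
        self.gpu = OrderedDict()
        self.host = OrderedDict()
        self.gpu_cap, self.host_cap = gpu_cap, host_cap
        self.remote_fetches = 0  # expensive cross-server fetches

    def get(self, node, fetch_remote):
        if node in self.gpu:
            self.gpu.move_to_end(node)       # refresh recency
            return self.gpu[node]
        if node in self.host:
            feat = self.host.pop(node)       # promote to GPU tier
        else:
            self.remote_fetches += 1
            feat = fetch_remote(node)
        self._insert_gpu(node, feat)
        return feat

    def _insert_gpu(self, node, feat):
        self.gpu[node] = feat
        if len(self.gpu) > self.gpu_cap:
            old, old_feat = self.gpu.popitem(last=False)  # evict LRU
            self.host[old] = old_feat                     # demote
            if len(self.host) > self.host_cap:
                self.host.popitem(last=False)

cache = TwoLevelCache(gpu_cap=2, host_cap=4)
for nid in [1, 2, 3, 1, 2, 3]:
    cache.get(nid, fetch_remote=lambda n: f"feat{n}")
print(cache.remote_fetches)  # 3: only the first visit of each node
```

The host tier absorbs GPU evictions here, so the second pass over nodes 1-3 is served without any remote fetch.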
Speaker Zhe Zhang (The University of Hong Kong)

Zhe Zhang is currently a Ph.D. candidate in the Department of Computer Science, The University of Hong Kong. She received her B.E. degree in 2019, from the Department of Computer Science and Technology, Zhejiang University. Her research interests include distributed machine learning algorithms and systems.


Galliot: Path Merging Based Betweenness Centrality Algorithm on GPU

Zheng Zhigao and Bo Du (Wuhan University, China)

Betweenness centrality (BC) is widely used to measure a vertex's significance via the frequency with which it appears on the shortest paths between other vertices. However, most recent BC algorithms suffer from high auxiliary memory consumption. To reduce the memory consumption of BC computation, we propose a path-merging based algorithm called Galliot to calculate BC values on the GPU, which aims to minimize on-board memory consumption and enable the BC computation of large-scale graphs on GPU. The proposed algorithm requires O(n) space and runs in O(mn) time on unweighted graphs. We present the theoretical principle behind the proposed path-merging method. Moreover, we propose a locality-oriented policy to maintain and update the worklist to improve GPU data locality. In addition, we conducted extensive experiments on NVIDIA GPUs to show the performance of Galliot. The results show that Galliot can process larger graphs, with 11.32× more vertices and 5.67× more edges than the graphs that recent works can process. Moreover, Galliot achieves up to 38.77× speedup over existing methods.
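For context, the sequential baseline that GPU BC systems like Galliot accelerate is Brandes' algorithm, which runs in O(mn) time on unweighted graphs; its auxiliary structures, notably the per-source predecessor lists, are the memory overhead that path merging targets. A minimal Python version of Brandes (our own sketch, not the paper's GPU algorithm):

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality on an undirected,
    unweighted graph given as adjacency lists."""
    n = len(adj)
    bc = [0.0] * n
    for s in range(n):
        sigma = [0] * n; sigma[s] = 1        # shortest-path counts
        dist = [-1] * n; dist[s] = 0
        preds = [[] for _ in range(n)]       # the memory hot spot
        order = []
        q = deque([s])
        while q:                             # BFS forward phase
            u = q.popleft(); order.append(u)
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
                    preds[v].append(u)
        delta = [0.0] * n
        for v in reversed(order):            # dependency accumulation
            for u in preds[v]:
                delta[u] += sigma[u] / sigma[v] * (1 + delta[v])
            if v != s:
                bc[v] += delta[v]
    return [b / 2 for b in bc]               # undirected: halve

# Path 0-1-2: only the middle vertex lies on a shortest path
print(betweenness([[1], [0, 2], [1]]))  # [0.0, 1.0, 0.0]
```

GPU variants parallelize the BFS frontier expansion and the backward accumulation; Galliot's contribution is merging paths so these per-source structures need not be fully materialized.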
Speaker Xie Peichen

Xie Peichen is a postgraduate student at Wuhan University who completed his undergraduate studies at Xiamen University; he focuses on high-performance computing and graph computing. He has co-authored papers accepted at INFOCOM 2023 and in JSAC. He won the highest award, Outstanding Winner, in the Mathematical Contest in Modeling in 2021.


Economic Analysis of Joint Mobile Edge Caching and Peer Content Sharing

Changkun Jiang (Shenzhen University, China)

Mobile edge caching (MEC) allows edge devices to cache popular contents and deliver them to end-users directly, which can effectively alleviate increasing backbone loads and improve end-users' quality of service. Peer content sharing (PCS) enables edge devices to share cached contents with others, and can further increase caching efficiency. While many efforts have been devoted to content caching and sharing technologies, the complicated technical and economic interplay between the two technologies remains open. In this paper, we propose a joint MEC-PCS framework, and focus on capturing the strategic interactions between different types of edge devices. Specifically, we model their interactions as a non-cooperative game, where each device (player) can choose to be an agent (which caches and shares contents with others) or a requester (which does not cache but requests contents from others) for each content. We analyze the existence and uniqueness of the game equilibrium systematically under generic usage-based pricing (for PCS). To address the incomplete-information issue, we further design a behavior rule that allows players to reach the equilibrium via a dynamic learning algorithm. Simulations show that the joint MEC-PCS framework can reduce the total system cost by 60%, compared with the benchmark pure caching system without PCS.
Speaker Changkun Jiang (Shenzhen University, China)

Changkun Jiang received his Ph.D. in Information Engineering from The Chinese University of Hong Kong in 2017. He is currently a faculty member in the College of Computer Science and Software Engineering at Shenzhen University, China. His research interests are primarily in artificial intelligence and economics for networked systems.


Enabling Switch Memory Management for Distributed Training with In-Network Aggregation

Bohan Zhao (Tsinghua University, China); Jianbo Dong and Zheng Cao (Alibaba Group, China); Wei Nie and Chang Liu (Shenzhen University, China); Wenfei Wu (Peking University, China)

Distributed training (DT) in shared clusters usually deploys a scheduler to allocate resources to multiple concurrent jobs. Meanwhile, a recent acceleration primitive, In-Network Aggregation (INA), introduces switch memory as a new critical resource for DT jobs, outside the management of prior schedulers. The lack of switch memory management leads to inefficient cluster resource usage. We build INAlloc, a switch memory management system for DT job schedulers to improve INA-empowered DT jobs in shared clusters. INAlloc adds a switch memory management layer to organize the physical switch memory, allocate memory to jobs, and provide friendly interfaces to schedulers. INAlloc incorporates switch memory into the modeling of a job's completion time (JCT) and its resources, which assists the scheduler in deciding the switch memory allocation. INAlloc overcomes the challenges of consistent and nondisruptive runtime switch memory reallocation. Our prototype and evaluation on real-world traces show that INAlloc can reduce jobs' deadline miss ratio by 75% and JCT by 27%.
Speaker Bohan Zhao (Tsinghua University)

Bohan Zhao is a Ph.D. candidate at Tsinghua University. His research interests include programmable networks and the information infrastructure for distributed applications, such as machine learning, high-performance computing, and big data.


Session Chair

Gil Zussman

Session F-10

Federated Learning 6

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 12:30 PM — 2:00 PM PDT
Location
Babbio 220

A Reinforcement Learning Approach for Minimizing Job Completion Time in Clustered Federated Learning

Ruiting Zhou (Southeast University, China); Jieling Yu and Ruobei Wang (Wuhan University, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Jiacheng Jiang and Libing Wu (Wuhan University, China)

Federated Learning (FL) enables a potentially large number of clients to collaboratively train a global model with the coordination of a central cloud server, without exposing clients' raw data. However, the FL model convergence performance, often measured by the job completion time, is hindered by two critical factors: non-independent and identically distributed (non-IID) data across clients and the straggler effect. In this work, we propose a clustered FL framework, MCFL, to minimize the job completion time by mitigating the influence of non-IID data and the straggler effect while guaranteeing the FL model convergence performance. MCFL builds upon a two-stage operation: i) a clustering algorithm constructs clusters, each containing clients with similar computing and communication capabilities, to combat the straggler effect within a cluster; ii) a deep reinforcement learning (DRL) algorithm based on soft actor-critic with discrete actions intelligently selects a subset of clients from each cluster to mitigate the impact of non-IID data, and derives the number of intra-cluster aggregation iterations for each cluster to reduce the straggler effect among clusters. Experiments are conducted under various configurations to verify the efficacy of MCFL. The results show that MCFL can reduce the job completion time by up to 70%.
Speaker Jieling Yu (Wuhan University)

Jieling Yu received the BE degree from the School of Cyber Science and Engineering, Wuhan University, China, in 2021. She is currently working toward the master’s degree from the School of Cyber Science and Engineering, Wuhan University, China. Her research interests include edge computing, federated learning, online learning and network optimization.



AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices

Peichun Li (Guangdong University of Technology, China & University of Macau, Macao); Guoliang Cheng and Xumin Huang (Guangdong University of Technology, China); Jiawen Kang (Nanyang Technological University, Singapore); Rong Yu (Guangdong University of Technology, China); Yuan Wu (University of Macau, Macao); Miao Pan (University of Houston, USA)

In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design the model shrinking to support local model training with elastic computation cost, and the gradient compression to allow parameter transmission with dynamic communication overhead. An enhanced parameter aggregation is conducted in an element-wise manner to improve the model performance. Focusing on AnycostFL, we further propose an optimization design to minimize the global training loss with personalized latency and energy constraints.
By revealing the theoretical insights of the convergence analysis, personalized training strategies are deduced for different devices to match their locally available resources. Experiment results indicate that, compared to state-of-the-art efficient FL algorithms, our learning framework can reduce the training latency and energy consumption by up to 1.9× while reaching a reasonable global testing accuracy. Moreover, the results also demonstrate that our approach significantly improves the converged global accuracy.
Speaker Peichun Li (University of Macau)

Peichun Li received his M.S. degree from the Guangdong University of Technology. He is currently a research assistant at the University of Macau. His research interests include edge computing and deep learning, particularly in efficient algorithms for artificial intelligence applications.


AOCC-FL: Federated Learning with Aligned Overlapping via Calibrated Compensation

Haozhao Wang (Huazhong University of Science and Technology, China); Wenchao Xu and Yunfeng Fan (The Hong Kong Polytechnic University, China); Ruixuan Li (Huazhong University of Science and Technology, China); Pan Zhou (School of CSE, Huazhong University of Science and Technology, China)

Federated Learning (FL) enables collaborative model training among a number of distributed devices with the coordination of a centralized server, where each device alternates between local gradient computation and communication with the server. FL suffers from significant performance degradation due to the excessive communication delay between the server and devices, especially when the network bandwidth of these devices is limited, which is common in edge environments. Existing methods overlap the gradient computation and communication to hide the communication latency and accelerate FL training. However, the overlapping also leads to an inevitable gap between the local model on each device and the global model on the server, seriously restricting the convergence rate of the learning process. To address this problem, we propose a new overlapping method for FL, AOCC-FL, which aligns the local model with the global model via calibrated compensation, so that the communication delay can be hidden without deteriorating the convergence performance. Theoretically, we prove that AOCC-FL admits the same convergence rate as the non-overlapping method. In both simulated and testbed experiments, we show that AOCC-FL achieves a convergence rate comparable to the non-overlapping method while outperforming state-of-the-art overlapping methods.
Speaker Haozhao Wang

Haozhao Wang is currently doing postdoctoral research in the School of Computer Science and Technology at Huazhong University of Science and Technology. He obtained his Ph.D. from the same university in 2021 and obtained his bachelor's degree from the University of Electronic Science and Technology. He was a research assistant in the Department of Computing at The Hong Kong Polytechnic University. His research interests include Edge Learning and Federated Learning.


Joint Edge Aggregation and Association for Cost-Efficient Multi-Cell Federated Learning

Tao Wu (National University of Defense Technology, China); Yuben Qu (Nanjing University of Aeronautics and Astronautics, China); Chunsheng Liu (National University of Defense Technology, China); Yuqian Jing (Nanjing University Of Aeronautics And Astronautics, China); Feiyu Wu (Nanjing University of Aeronautics and Astronautics, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Chao Dong (Nanjing University of Aeronautics and Astronautics, China); Jiannong Cao (Hong Kong Polytechnic Univ, Hong Kong)

Federated learning (FL) has been proposed as a promising distributed learning paradigm to realize edge artificial intelligence (AI) without revealing raw data. Nevertheless, it incurs inevitable costs in terms of training latency and energy consumption, due to periodic communication between user equipments (UEs) and the remote central parameter server. Thus motivated, we study the joint edge aggregation and association problem to minimize the total cost, where model aggregation over multiple cells happens only at the network edge. After proving its hardness with coupled variables, we transform it into a set function optimization problem whose objective function is neither submodular nor supermodular, which further complicates the problem. To tackle this difficulty, we first split it into multiple edge association subproblems, where the optimal computation resource allocation can be obtained in closed form. We then construct a substitute function with supermodularity and a provable upper bound. On this basis, we reformulate an equivalent set function minimization problem under a matroid base constraint. We propose an approximation algorithm for the original problem based on a two-stage search strategy with a theoretical performance guarantee. Both extensive simulations and field experiments are conducted to validate the effectiveness of our proposed solution.
Speaker Tao Wu (National University of Defense Technology, China)



Session Chair

Jun Li

